# Dynamic quantization
Models listed under this topic:

| Model | Organization | License | Category | Downloads | Likes | Description |
| --- | --- | --- | --- | --- | --- | --- |
| Smollm 135M Instruct | litert-community | Apache-2.0 | Large Language Model | 131 | 1 | A lightweight instruction fine-tuned language model optimized for mobile deployment. |
| Deepseek R1 Distill Qwen 32B Unsloth Bnb 4bit | unsloth | Apache-2.0 | Large Language Model (Transformers, English) | 938 | 10 | DeepSeek-R1 is the DeepSeek team's first-generation reasoning model. Trained through large-scale reinforcement learning without supervised fine-tuning (SFT) as an initial step, it demonstrates strong reasoning capabilities. |
| Deepseek R1 GGUF | unsloth | MIT | Large Language Model (English) | 2.0M | 1,045 | A 1.58-bit dynamically quantized build of DeepSeek-R1 produced by Unsloth, using the MoE architecture and supporting English-language tasks. |
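The Deepseek R1 GGUF entry refers to Unsloth's dynamically quantized GGUF builds, in which different layers are kept at different precisions (down to 1.58 bits) rather than a single uniform bit width. As a minimal sketch of running such a file locally, assuming the llama-cpp-python runtime (not specified on this page) and a hypothetical local filename for whichever quant variant was downloaded:

```python
from llama_cpp import Llama

# Hypothetical filename: the actual GGUF file name depends on which
# dynamic-quant variant was downloaded from the Deepseek R1 GGUF repository.
llm = Llama(
    model_path="DeepSeek-R1-dynamic-quant.gguf",
    n_ctx=4096,       # context window size
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what 1.58-bit quantization means."}],
    max_tokens=128,
)
print(result["choices"][0]["message"]["content"])
```

Setting `n_gpu_layers` to 0 falls back to CPU-only inference, which is slower but works on machines without a supported GPU.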
## Featured Recommended AI Models